Search Results: "rudi"

14 April 2012

NeuroDebian: NeuroDebian nd* tools

One of the goals of NeuroDebian is to provide recent versions of scientific software on stable Debian (and Ubuntu) deployments. That is why we build (whenever possible) every new package not only for Debian unstable (the entry point of packages into Debian) but also for Debian testing and stable, and for Ubuntu releases. To automate this procedure we prepared a few rudimentary wrappers around cowbuilder that allow building packages in isolated environments. We also provide a backport-dsc script to ease backporting, with optional application of per-release patch sets. In this blog post we would like to introduce you to these tools. They will be of use to anyone working on a package intended for upload to the NeuroDebian repository, or to anyone interested in verifying whether a package can be easily backported. With a single command you will be able to build a given Debian source package across distributions. As a result you will verify that there are no outstanding backportability issues, or compatibility problems with core components (e.g. supported versions of Python) if your source package exercises its test suite at build time.
Procedure
  • [1-20 min] If you are not running a Debian-based distribution, install the NeuroDebian VM; otherwise just add the apt sources for the NeuroDebian repository.

  • [<1 min] Install the neurodebian-dev package providing nd* tools:

    sudo apt-get install neurodebian-dev
  • [1-5 min] Adjust the default configuration (sudo vim /etc/neurodebian/cmdsettings.sh) used by the nd* commands to:

    • point the cowbuilderroot variable to a directory under the brain account, e.g. ~brain/debs (you need to create it yourself)
    • remove undesired releases (e.g. the deprecated karmic) from allnddist and alldist
    • adjust the mirror entries to use the Debian and Ubuntu mirrors of your choice, or maybe even point them at your approx apt-caching server
  • [10-60 min] Create the COWs for all releases you left in the configuration file:

    sudo nd_adddistall
Building
At this point you should be all set to build packages for all distributions with a single command. E.g.:
sudo nd_build4all blah_1-1.dsc
should take the .dsc file you provide and build it for vanilla Debian sid and all ND-supported releases of Debian and Ubuntu. nd_build4allnd would build only for the latter, omitting vanilla Debian sid. A high-level summary of which builds succeeded or failed is reported in summary.log in the same directory, with pointers to the .build log files for the corresponding architecture/release.
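If you only want the backported source package, backport-dsc can also be used on its own. A minimal sketch; the -d option for selecting the target release is my assumption, so check backport-dsc --help for the actual interface:

    # Hypothetical invocation: regenerate the source package for one release
    backport-dsc -d squeeze blah_1-1.dsc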
Troubleshooting a Failing Build
Provide the --hookdir command-line pbuilder argument to point at a hook directory whose hooks get invoked by pbuilder upon failure, e.g.:
sudo apt-get install git
git clone https://github.com/neurodebian/neurodebian
sudo nd_build4debianmain *.dsc -- --hookdir $PWD/neurodebian/tools/hooks
If you have any comments (typos, improvements, etc.) feel welcome to leave a comment below, or contact us.

5 January 2012

Patrick Schoenfeld: Bringing GVFS to a good use

One of the GNOME features I have really liked since the beginning of my GNOME usage is the ability to mount various network file systems with a few clicks and keystrokes. It enables me to quickly access NFS shares or files via SFTP. But so far these mounts weren't actually mounts in the classical sense, so they were only of rudimentary use.

As a user who often works with terminals I was always halfway happy with that feature and halfway not:

- Applications have to be aware of and enabled to make use of that feature, so it's often necessary to work around problems (e.g. movie players not being able to open a file on a share)
- No shell access to files

Previously this GNOME feature was realised with an abstraction layer called GNOME VFS, which all applications needed to use if they wanted to provide access to the "virtual mounts". It made no effort to actually re-use common mechanisms of Un*x-like systems, like mount points, so it was doomed to fail to a certain degree.

Today GNOME uses a new mechanism, called GVFS. It's realized by a shared library and daemon components communicating over D-Bus. At first glance it does not seem to change anything, so I was rather disappointed. But then I heard rumors that Ubuntu was actually making these mounts available at a special mount point in ~/.gvfs.
My Debian GNOME installation was not.

So I investigated a bit and found evidence of a daemon called gvfs-fuse-daemon, which apparently handles that. I then figured out that this daemon ships in a package called gvfs-fuse, and learned that installing it and restarting my GNOME session is all that's needed.
Now getting shell access to my GNOME "Connect to server" mounts is actually possible, which makes these mounts really useful after all. The only thing left to find out is whether, e.g., the video player example now works from Nautilus. But even if it doesn't, I'm still able to use the files via a shell.
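For reference, the whole fix boils down to the following (the package name is from above; ~/.gvfs is the mount point mentioned earlier):

    sudo apt-get install gvfs-fuse
    # log out and back into the GNOME session, then:
    ls ~/.gvfs    # the virtual mounts are now visible to any shell tool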

The solution is quite obvious on the one hand, but totally non-obvious on the other.

A common user will probably not find that solution without aid. After all, the package name does not really suggest what the package is used for, since it refers to technologies instead of the problem it solves. Which is understandable. What I don't understand is why this package is not a dependency of the gnome meta package. But I haven't yet asked the maintainer, so I cannot really blame anybody.

However: Now GVFS is actually useful.

2 November 2011

Benjamin Mako Hill: Slouching Toward Autonomy

I care a lot about free network services. Recently, I have been given lots of reasons to be happy with the progress the free software community has made in developing services that live up to my standards. I have personally switched from a few proprietary network services to alternative systems that respect my autonomy, and have been very happy both with the freedom I have gained and with the no-longer-rudimentary feature sets that the free tools offer. Although there is plenty left to do, here are four tools I'm using now instead of the proprietary tools that many people use, or that I used to use myself: In trying to switch away from proprietary services, I have found that there is still a lack of good information comparing the different systems out there and giving folks advice on who might be able to help with things like setup or hosting. I really value hearing from other people about what they use and what they find useful, but finding this information online still seems to be a struggle. The autonomo.us wiki seems like the natural place to host or summarize this discussion and to collect and share information useful for those of us slouching (or running) toward autonomy in our use of network services. I invite folks to get involved in improving that already useful resource. For example, this week I spent a few hours researching free social bookmarking tools and produced a major update to the (already useful) social bookmarking page on the autonomo.us wiki. Of course, I can imagine lots of ways to improve that page and to collect similar information on other classes of network services. Please join me in that effort!

29 June 2011

Charles Plessy: BioPerl and regression tests

Two weeks ago, I updated the packages containing the BioPerl modules bioperl and bioperl-run, which allowed the work on GBrowse, one of the genome browsers available in Debian, to resume. Like many Perl modules, BioPerl has extensive regression tests. Some of them need access to external services, and are disabled by default as the Internet is not available on our build farm. Nevertheless, they can be triggered through the environment variable DEB_MAINTAINER_MODE when building the package locally. BioPerl successfully passes all off-line tests, and part of the on-line tests are already corrected in its development branch. In contrast, BioPerl-Run fails the tests for Bowtie, DBA, EMBOSS, Genewise, Hmmer, PAML, SABlastPlus, T-Coffee and gmap-run. In some cases, like EMBOSS, it is because a command has been renamed in Debian. In other cases, in particular DBA and Genewise, it is much more difficult to figure out on which side the problem lies. Regression tests are essential to determine whether a package works on ports other than the one used by the packager (amd64 in my case). Even simple ones can be useful. In the case of T-Coffee, which has a rudimentary test that I activated recently, it showed that the packages distributed on armel are not working at all. Running regression tests when building packages has advantages, in particular having the results published for each port automatically as part of the build logs. But it also causes problems. First, a package would need to build-depend on every piece of software it can interact with, in order to test it comprehensively. In the example of bioperl-run, this makes it impossible to build on armel as long as t-coffee is broken there. Second, this approach does not help users to test an already installed program on their own computers in the same way. Maybe Debian Enhancement Proposal 8, to test binary packages with material provided by their source packages, will solve these two problems.
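As a sketch, enabling the on-line tests for a local build might look like this (the variable name is from the post; the exact value the packaging checks for is my assumption):

    # Assumption: any non-empty value enables the maintainer-mode tests.
    DEB_MAINTAINER_MODE=1 dpkg-buildpackage -us -uc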

23 June 2011

Bastian Blank: Linux 3.0 and Xen

It took a long time to get all the parts of the Xen support into the Linux kernel. While rudimentary Dom0 support has been available since 2.6.38, support for the device backends was missing. It was possible to replace them with a userspace implementation included in qemu, but I never tested that. With Linux 3.0, both the traditional block backend and the network backend are available. They are already enabled in the current 3.0-rc3/-rc4 packages in experimental, so the packages can be used as Dom0 and run guests. Right now the backend modules are not loaded automatically, so this still needs some work: the init scripts don't load them, because the names were in flux the last time I laid hands on it, and the kernel itself does not expose enough information to load them via udev. I think using udev to load the modules is the way to go. This step marks the end of a five-year journey. Around 2.6.16 the Xen people started to stay really close to Linux upstream. With the 2.6.18 release this stopped, and the tree was pushed in different states into Debian Etch and RHEL 5. After that, Xen upstream ceased work on newer versions completely; only changes to the now old 2.6.18 tree were made. SuSE started a forward port of the old code base to newer kernel versions, and Debian Lenny released with such a patched 2.6.26. Around that time, minimal support for DomU on i386 using paravirt showed up, and Lenny had two different kernels with Xen support. Since 2.6.28 this support has been mature and has worked rather flawlessly. Somewhat after that, a new port of the Dom0 support, now using paravirt, showed up. This tree, based on 2.6.32, was released with Debian Squeeze. After several more rounds of redefining and polishing it is now mostly merged into the core kernel. I don't know what the future brings. We have two virtualization systems supported by Linux now. The first is KVM, which turns the kernel into a hypervisor and runs systems with the help of hardware virtualization. The latter is Xen, which runs under a standalone hypervisor and supports both para- and hardware virtualization. Both work; KVM is easier to use and even works on current System z hardware. It can be used by any user, with hopefully enough margin of security between them. Xen's home is more on servers, where you don't have users at all. Both have advantages and disadvantages, so everyone has to decide what they need; there is no "one size fits all".
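Returning to the not-yet-loaded backend modules: until the init scripts or udev take care of it, loading them by hand on the Dom0 should work. A minimal sketch (the module names are, to the best of my knowledge, the ones merged upstream; verify against your kernel):

    sudo modprobe xen-blkback   # traditional block backend
    sudo modprobe xen-netback   # network backend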

7 June 2011

Lars Wirzenius: TDDD systest tool proof of concept

Continuing my earlier musings about test-driven distro development, and the tools that would require: I imagine something like this: We might also want this scenario: There might be other scenarios that would be useful to test as well. Even from these two, it's clear there's a need for at least three separate tools. I've written a rudimentary version of the first tool: vmdebootstrap. I've since learned there's a bunch of others that might also work. There's room for more: someone should write (or find) a tool to make a snapshot of a real system and create a VM image that mimics it, for example. Anyway, for now, I'll assume that one of the existing tools is good enough to get started. For the second tool, I wrote a quick-and-dirty proof-of-concept thing, see systest.py. Here's a sample of how it might be used:
liw@havelock$ ./systest -v --target 192.168.122.139 --user tomjon
test 1/6: cat
test 2/6: only-ssh-port
ERROR: Assertion failed: ['22/tcp', '139/tcp', '445/tcp'] != ['22/tcp']
[status 1]
liw@havelock$ ./systest -v --target 192.168.122.139 --user tomjon \
    cat ping6-localhost ping-localhost simple-dns-lookup ssh-login
test 1/5: cat
test 2/5: ping6-localhost
test 3/5: ping-localhost
test 4/5: simple-dns-lookup
test 5/5: ssh-login
liw@havelock$ 
The first run failed, because the VM I'm testing against has some extra ports open. Some of the tests require logging into the machine via ssh, and for that one needs to specify the user to use. systest may overlap heavily with system monitoring tools, and possibly the production implementation should be based on those. I think it's best to design such a tool for the more general purpose of testing whether a system currently works, rather than as an integrated part of a more specific larger tool. This lets the tool be useful for other things than just testing specific things about Debian. (The production implementation would then need to not have all the tests hardcoded, of course. SMOP.) The third tool I have not spent a lot of time on yet. One thing at a time. Given these tools, one would then need to decide how to use them. The easiest way would be to use them like lintian and piuparts: run them frequently on whatever packages happen to be in testing or unstable or experimental, put links to the test reports on the PTS, and hope that people fix things. That is the easiest way to start. Once there's a nice set of test cases and scenarios, it may be interesting to think about more aggressive ways: for example, preventing packages from migrating to testing unless the test suite passes with them. If the tests do not pass, one of four things is broken: If things are set up properly, the last one should be rare. The other three always require manual inspection: it is not possible to automatically know whether the test itself, or the code it tests, is at fault. It is, however, enough to know that something is wrong. If the tests are written well, they should be robust enough not to be the culprits very often. (Someone wanting to make a rolling distribution, or even better, a monthly mini-release, might benefit from this sort of automated testing.)
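For illustration, a standalone shell version of the only-ssh-port check from the transcript above might look like this (assuming nmap is available; systest.py itself implements its checks differently):

    target=192.168.122.139
    open_ports=$(nmap -p- --open "$target" | awk '/^[0-9]+\/tcp.*open/ {print $1}')
    if [ "$open_ports" = "22/tcp" ]; then
        echo "PASS: only ssh is open"
    else
        echo "FAIL: open ports: $open_ports"
    fi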

2 June 2011

Craig Small: What happens without software testing

Well, jffnms 0.9.0 was, well, a good example of what can go wrong without adequate testing. Having it written in PHP makes it difficult to test, because you can have entire globs of code that are completely wrong but never activated (because an if statement proves false, for example) and you won't get an error. It also had some database migration problems. This is the most difficult and annoying part of releasing versions of JFFNMS. There are so many things to check, like: I've been looking at sqlalchemy, which is part of turbogears. It's a pretty impressive setup and lets you get on with writing your application instead of mucking around with the low-level stuff. It's a bit of a steep learning curve learning python AND sqlalchemy AND turbogears, but I've got some rudimentary code running OK and it's certainly worth it (python erroring on unassigned variables but not forcing you to define them is a great compromise). The best thing is that you can develop on a sqlite database but deploy using mysql or postgresql with a minor change. Python and turbogears both emphasise automatic testing. Ideally the testing should cover all the code you write. The authors even suggest you write the test first, then implement the feature. After chasing down several bugs, some of which I introduced while fixing other bugs, automatic testing would make my life a lot easier and perhaps I wouldn't dread the release cycle so much.

13 May 2011

Lars Wirzenius: vmdebootstrap, for real

Some months ago I wished for a tool to create virtual disk images, something similar to debootstrap except it would create a disk image instead of a directory. It would be used like this:
sudo ./vmdebootstrap --image test.img --size 1G \
--log test.log --mirror http://mirror.lan/debian/
debootstrap is wonderful for creating a basic Debian system for use with chroot. In the modern world, something similar but for virtual machines (such as KVM) would be nice to have, I thought. Particularly if it is something that is not tied to a particular virtualization technology, and does not require running in an actual virtual machine (for speed and simplicity). I could imagine using a tool like this for automatically building appliance-like system images, for example for FreedomBox. Being able to do that in a fully automatic manner would make development simpler and easier for others to reproduce, and would allow for things like automatic integration testing: build an image, boot it in a virtual machine, run automatic tests, report results. There are a bunch of tools that almost do this, but not quite. For example, live-build builds a disk image, but one that is aimed at live CDs and such. This brings in some limitations. By default, the image is not persistent. You can make a persistent image, but that requires you to provide a second partition (or disk) that holds the modified parts, using copy-on-write. It also means that if the bootloader or initramfs needs to be updated, then the read-only image needs to be updated. That's fine for live CDs, but not so good if you want a real installed system that you can upgrade through all future releases of Debian. (Ben Armstrong kindly helped me use live-build the right way. Thanks!) There's also grml-debootstrap, which seems to work on a real system, but not for installing onto a virtual disk image (and the difference is crucial to me). Based on feedback I received back then, and some experimentation I've done over the past couple of days, I wrote a very rudimentary version of the tool I want. The code is on Gitorious. (You need my cliapp library to run it.) Be warned: it really is a very rudimentary version. For example, it provides no flexibility in the choice of partitioning, filesystem type, what packages get installed, etc. It does not configure networking, and the root password is disabled. Some of that flexibility would be quite easy to add. It also uses extlinux as the bootloader, since I utterly failed to get grub2 to install onto the virtual disk. (I did, however, manage to overwrite my development machine's bootloader.) The command above works, and results in a RAW format disk image that boots under KVM. It might boot off a hard disk or USB flash drive too, but I haven't tried that.
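To smoke-test the resulting image, something like the following should do (the exact KVM invocation is my sketch, not from the post):

    # Boot the RAW image under KVM; 512 MiB of RAM is an arbitrary choice.
    kvm -m 512 -hda test.img

I now have several questions: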

31 March 2011

Jordi Mallach: A tale of Tristània and its Quadrennial Royal Ball

In one of the corners of what is now known as Europa, there was a rich, prosperous and beautiful kingdom known as Tristània. In the past, not that long ago, it had been a number of smaller kingdoms and caliphates, all with their own cultures, religions and ways of life. Wars, and a series of marriages of convenience, eventually shaped what ended up being the united kingdom of Tristània. Throughout the years, some of the unified cultures grew and flourished, while others struggled to survive in their ever-shrinking areas of influence.
A required introduction
Sometimes, the minor cultures would suffer due to oppression from the delegates of the King, who would ban any expression of these cultures, as they were seen as a potential threat to the kingdom's stability and unity. For example, just a few decades before the main subject of this tale, the predecessor of the incumbent King took power by force, after crushing everyone who opposed his uprising during a bloody and hard civil war. His reign was ruthless and he imposed draconian laws upon his people: usage and teaching of the minor languages was banned, and everyone was forced to use the language of the Centràlia region, in public or private. After four decades, the majority of the Tristanian people were sick enough of the situation to consider standing up against their fear of the regime and demanding freedom, but repression prevailed until the old general died. His place was taken by the King's grandson, even though the people had expressed, just before the Great War, that they had had enough of kings and demanded a ruler they could choose directly. Of course, the new King seemed a lot nicer than the one they had been suffering for ages, so when asked if they accepted the new situation, an overwhelming majority said "yes". However, there was a region, Verdàlia, where the majority said "no". Things were actually more complicated. Verdalians formed a traditional, proud society, and while the years of oppression had undoubtedly weakened it, they had managed to keep their very unique culture, language and traditions healthy. The Verdal language sounded really weird to the ears of Centràlians and even the other minor cultures of the Kingdom, and erudites struggled to find its real origins, never reaching plausible conclusions. Verdanians, as we already know, were a traditional society, living in a land of deep and poorly connected valleys. Little did they know or care about the complicated matters of Centràlia and the other regions. What made them happy was to take care of their sheep and cows, keep a good fire in their living room and, every now and then, enjoy one of their log-cutting contests. The impositions of the former dictator were too much for them, and some of them started sabotaging, assaulting and killing some of the dictator's soldiers, agents and officers. This was a huge risk at the time; getting caught meant the death penalty for sure, and at first, even people from other regions were in favour of these actions. However, this popular support greatly diminished when the new King took the throne, as this minority continued with the killings, while most of the people saw it was no longer justified.
The Royal Ball
One of the very first measures the young King introduced was to organise the "Royal Ball of Tristània", a major event through which the people of the different regions would be able to elect their delegates to the Crown.
Every four years, a Great Ball contest would happen in Centràlia, and the winners would be able to decide on their own on some of the matters that affected their region. Verdanians would send a few teams of dancers, each of which came from different towns or areas. Some Verdanian teams were happy about the King and the new political situation, but other teams weren't so much. And some others, while being simple non-violent dancers, were known to be supporters of the violent minority who kept on harassing, assaulting and even killing in their struggle for the "freedom of Verdània". The Verdanian groups aligned with the distinct culture of Verdània (including those who were said to support the violent) tended to get a lot more points in the dancing contest, and a majority of the elected delegates were appointed by them, making it easier to pass laws and edicts that favoured the protection of their ways, traditions and language. No matter how hard they tried, the dancing groups closer to Centràlia kept losing to the majority. After many years of dance contests, these groups used their closeness to the King's court to pass the Ball Law of Tristània, which would ban any dancing group that didn't condemn the assaults and killings that kept happening in Verdània. The unsurprising result was that, with fewer dancing groups participating in the following Royal Ball, the Verdanian majority was broken and new delegates, friendly to the Centralian officers, were elected. Many people who had been in favour of the assaults and killings began to question this strategy, and this political movement's unity started to break. In the end, the dancers decided to part ways with the violent; they wanted to dance in the next ball, and to do so, they wrote a letter to the King, in which they explicitly expressed their rejection of violent ways, and their embracing of dancing as the only means to drive their political agenda. An objective reading of the new Ball Law clearly showed that this was enough: the text only said that the requisite for a dancing group was to disavow all kinds of violence. This wasn't really expected in Centràlia, so they started to add new requirements in an attempt to keep this group out of the contest: their decisive majority in Verdània was at stake. The Royal Ball was nearing and registrations for the contest would soon close. The Centralian government first argued that the dancing group should reject the violence coming from the Verdanian extremists in particular. The dancers did. Then they argued that the dancers were the same people who had been supporting violence in Verdània for years, and that obviously their violence rejection statement was a lie. The dancers struggled to find new dancers who had not been involved in past dances. But it was not enough. They then claimed that this dance group should be quarantined for four years, until they could prove they really were serious about their new non-violent ideas. The dance group made a plea to the Tristànian Supreme Council, a group of sixteen experts in the law of the Kingdom, and argued that all of these draconian requirements were not part of the law being enforced by the King. Their appeal to the elder counselors was in vain, though. They ruled that this dancing group was as criminal as the violent minority they had once supported, and should by no means take part in the Royal Ball. As a last, desperate measure, the dancing group reached an agreement with other Verdanian dancers to join forces.
They would adopt a new name and new dancing costume colours. Many feared this would only end in the banning of the other dancing group as well. Unfortunately, the end of this story has not been written yet, but it will be completed very soon. Only time will tell whether things continue to be very sad and unfair in Tristània, or whether the dance contest will once again be impartial, with legitimate results.

28 August 2010

Thorsten Glaser: mksh, encodings, MirBSD, BitTorrent, WinCE

mksh was merged into Android (both AOSP and Google's internal master tree) in the night of 24/25 August, and is expected to be the one shell to rule them all, for Gingerbread. mksh(1) now also has a cat builtin, mostly for here documents. It calls the cat(1) command if it receives any options. The shell is nevertheless smaller than yesterday because of improved string pooling. There's another reason to use the MirOS OPTU-16 encoding instead of PEP 383, on which I already wrote: try passing a wide-char filename to a function such as MessageBoxW, or create a filename on a system using wide chars, such as FAT's LFN or ISO 9660's Joliet, or one that only allows Unicode (canonically decomposed, of all things) like HFS+. OPTU-8 at least maps to somewhat reserved codepoints (it would, of course, be better to get an official 128-codepoint block, but the chance of getting that in the BMP is small). Still. Oh well, the torrents. I've remade them all, using one DHT seed node and OpenBitTorrent as tracker, and put them on a very rudimentary BT page that will be completely redone soonish. Please re-download them. I currently do not believe f.scarywater.net will return. Finally, I fell victim to a selling-out and may have just bought a Windows Mobile 6 based phone (Glofiish X650), an SDHC card and an extra battery with double capacity. Well, at least it's said to run CacheWolf well. I still would like to have something like Interix, Cygwin, UWIN, coLinux, or maybe some qemu-for-WinCE variant that runs Android, Maemo, Debian/armhf (or armel or arm) at near-native speed (and is usable; the device sadly doesn't have a hardware keyboard, but it comes with a SiRFstar GPSr). It only has 64 MiB RAM, like the Zaurus SL-C3200 and the jesusPhone, though. Any chance to get MirWorldDomination onto that device as well?

16 May 2009

Antti-Juhani Kaijanaho: This is Alue

I have made a couple of references in my blog to the new software suite I am writing, which I am calling Alue. It is time to explain what it is all about. Alue will be a discussion forum system providing a web-based forum interface, an NNTP (Netnews) interface and an email interface, all with equal status. What will be unusual compared to most of the competition is that all these interfaces will be coequal views of the same abstract discussion, instead of the system being primarily one of these things and providing the others as bolted-on gateways. (I am aware of at least one other such system, but it is proprietary and thus not useful to my needs. Besides, I get to learn all kinds of fun things while doing this.) I have, over several years, come across the need for such a system many times and never found a good, free implementation. I am now building this software for the use of one new discussion site that is being formed (which is graciously willing to serve as my guinea pig), but I hope it will eventually be of use to many other places as well. I now have the first increment ready for beta testing. Note that this is not even close to being what I described above; it is merely a start. It currently provides a fully functional NNTP interface to a rudimentary (unreliable and unscalable) discussion database. The NNTP server implements most of RFC 3977 (the base NNTP spec; IHAVE, MODE-READER, NEWNEWS and HDR are missing), all of RFC 4642 (STARTTLS) and a part of RFC 4643 (AUTHINFO USER; the SASL part is missing). The article database is intended to support (with certain deliberate omissions) the upcoming Netnews standards (USEFOR and USEPRO), but currently omits most of the mandatory checks. There is a test installation at verbosify.org (port 119), which allows anonymous reading but requires identification and authentication for posting. I am currently handing out accounts only by invitation. Code can be browsed in a Gitweb; git clone requests should be directed to git://git.verbosify.org/git/alue.git/. There are some tweaks to be done to the NNTP frontend, but after that I expect to be rewriting the message filing system to be at least reliable if not scalable. After that, it is time for a web interface.
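For those wanting a look, checking out the code via the clone URL given above is just:

    git clone git://git.verbosify.org/git/alue.git/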

14 May 2009

Antti-Juhani Kaijanaho: Asynchronous transput and gnutls

CC0
To the extent possible under law,
Antti-Juhani Kaijanaho has waived all copyright and related or neighboring rights to
Asynchronous transput and gnutls. This work is published from Finland.
GnuTLS is a wonderful thing. It even has a thick manual, but nevertheless its documentation is severely lacking from the programmer's point of view (and there don't even seem to be independent howtos floating around the net). My hope is to remedy that problem, in small part, with this post. I spent the weekend adding STARTTLS support to the NNTP (reading) server component of Alue. Since Alue is written in C++ and uses the Boost ASIO library as its primary concurrency framework, it seemed prudent to use ASIO's SSL sublibrary. However, the result wasn't stable, and debugging it looked unappetizing. So, I wrote my own TLS layer on top of ASIO, based on gnutls. Now, the gnutls API looks like it works only with synchronous transput: all TLS network operations are of the form "do this and return when done"; for example gnutls_handshake returns once the handshake is finished. So how does one adapt this to asynchronous transput? Fortunately, there are (badly documented) hooks for this purpose. An application can tell gnutls to call application-supplied functions instead of the read(2) and write(2) system calls. Thus, when setting up a TLS session but before the handshake, I do the following:
                gnutls_transport_set_ptr(gs, this);
                gnutls_transport_set_push_function(gs, push_static);
                gnutls_transport_set_pull_function(gs, pull_static);
                gnutls_transport_set_lowat(gs, 0);
Here, gs is my private copy of the gnutls session structure, and push_static and pull_static are static member functions in my session wrapper class. The first line tells gnutls to give the current this pointer (a pointer to the current session wrapper) as the first argument to them. The last line tells gnutls not to try treating the this pointer as a Berkeley socket. The pull_static static member function just passes control on to a non-static member, for convenience:
ssize_t session::pull_static(void *th, void *b, size_t n)
{
        return static_cast<session *>(th)->pull(b, n);
}
The basic idea of the pull function is to try to return immediately with data from a buffer, and if the buffer is empty, to fail with an error code signalling the absence of data with the possibility that data may become available later (the POSIX EAGAIN code):
class session
{
        [...]
        std::vector<unsigned char> ins;
        size_t ins_low, ins_high;
        [...]
};
ssize_t session::pull(void *b, size_t n_wanted)
{
        unsigned char *cs = static_cast<unsigned char *>(b);
        if (ins_high - ins_low == 0)    /* buffer empty: signal EAGAIN */
        {
                errno = EAGAIN;
                return -1;
        }
        size_t n = ins_high - ins_low < n_wanted
                ?  ins_high - ins_low
                :  n_wanted;
        for (size_t i = 0; i < n; i++)
        {
                cs[i] = ins[ins_low+i];
        }
        ins_low += n;
        return n;
}
Here, ins_low is an index to the ins vector specifying the first byte which has not already been passed on to gnutls, while ins_high is an index to the ins vector specifying the first byte that does not contain data read from the network. The assertions 0 <= ins_low, ins_low <= ins_high and ins_high <= ins.size() are obvious invariants in this buffering scheme. The push case is simpler: all one needs to do is buffer the data that gnutls wants to send, for later transmission:
class session
{
        [...]
        std::vector<unsigned char> outs;
        size_t outs_low;
        [...]
};
ssize_t session::push(const void *b, size_t n)
{
        const unsigned char *cs = static_cast<const unsigned char *>(b);
        for (size_t i = 0; i < n; i++)
        {
                outs.push_back(cs[i]);
        }
        return n;
}
The low water mark outs_low (indicating the first byte that has not yet been sent to the network) is not needed in the push function. It would be possible for the push callback to signal EAGAIN, but it is not necessary in this scheme (assuming that one does not need to establish hard buffer limits). Once gnutls receives an EAGAIN condition from the pull callback, it suspends the current operation and returns to its caller with the gnutls condition GNUTLS_E_AGAIN. The caller must arrange for more data to become available to the pull callback (in this case by scheduling an asynchronous write of the data in the outs buffer scheme and scheduling an asynchronous read to the ins buffer scheme) and then call the operation again, allowing the operation to resume. The code so far does not actually perform any network transput. For this, I have written two auxiliary methods:
class session
{
        [...]
        bool read_active, write_active;
        [...]
};
void session::post_write()
{
        if (write_active) return;
        if (outs_low > 0 && outs_low == outs.size())
        {
                outs.clear();
                outs_low = 0;
        }
        else if (outs_low > 4096)
        {
                outs.erase(outs.begin(), outs.begin() + outs_low);
                outs_low = 0;
        }
        if (outs_low < outs.size())
        {
                stream.async_write_some
                        (boost::asio::buffer(outs.data()+outs_low,
                                             outs.size()-outs_low),
                         boost::bind(&session::sent_some,
                                     this, _1, _2));
                write_active = true;
        }
}
void session::post_read()
{
        if (read_active) return;
        if (ins_low > 0 && ins_low == ins.size())
        {
                ins.clear();
                ins_low = 0;
                ins_high = 0;
        }
        else if (ins_low > 4096)
        {
                ins.erase(ins.begin(), ins.begin() + ins_low);
                ins_high -= ins_low;
                ins_low = 0;
        }
        if (ins_high + 4096 >= ins.size()) ins.resize(ins_high + 4096);
        stream.async_read_some(boost::asio::buffer(ins.data()+ins_high,
                                                   ins.size()-ins_high),
                               boost::bind(&session::received_some,
                                           this, _1, _2));
        read_active = true;
}
Both helpers prune the buffers when necessary. (I should really remove those magic 4096s and make them a symbolic constant.) The data members read_active and write_active ensure that at most one asynchronous read and at most one asynchronous write is pending at any given time. My first version did not have this safeguard (instead trying to rely on the ASIO stream reset method to cancel any outstanding asynchronous transput at need), and the code sent some TLS records twice, which is not good: sending the ServerHello twice is guaranteed to confuse the client. Once ASIO completes an asynchronous transput request, it calls the corresponding handler:
void session::received_some(boost::system::error_code ec, size_t n)
{
        read_active = false;
        if (ec) { pending_error = ec; return; }
        ins_high += n;
        post_pending_actions();
}
void session::sent_some(boost::system::error_code ec, size_t n)
{
        write_active = false;
        if (ec) { pending_error = ec; return; }
        outs_low += n;
        post_pending_actions();
}
Their job is to update the bookkeeping and to trigger the resumption of suspended gnutls operations (which is done by post_pending_actions). Now we have all the main pieces of the puzzle. The remaining pieces are obvious but rather messy, and I'd rather not repeat them here (not even in a cleaned-up form). But their essential idea goes as follows: When called by the application code or when resumed by post_pending_actions, an asynchronous wrapper of a gnutls operation first examines the session state for a saved error code. If one is found, it is propagated to the application using the usual ASIO techniques, and the operation is cancelled. Otherwise, the wrapper calls the actual gnutls operation. When it returns, the wrapper examines the return value. If successful completion is indicated, the handler given by the application is posted in the ASIO io_service for later execution. If GNUTLS_E_AGAIN is indicated, post_read and post_write are called to schedule actual network transput, and the wrapper is suspended (by pushing it into a queue of pending actions). If any other kind of failure is indicated, it is propagated to the application using the usual ASIO techniques. post_pending_actions merely empties the queue of pending actions and schedules the actions it found in the queue for resumption. The code snippets above are not my actual working code. I have mainly removed some irrelevant details from them (mostly certain template parameters, debug logging and mutex handling). I don't expect the snippets to compile. I expect I will be able to post my actual git repository to the web in a couple of days. Please note that my (actual) code has received only rudimentary testing. I believe it is correct, but I won't be surprised to find it contains bugs in the edge cases. I hope this is, still, of some use to somebody :)

4 January 2009

Andrew Pollock: [tech] Mixing electricity and water: monitoring the cat water bowl with Nagios

(this is something I've had "in production" for many months now, I just haven't had the time or energy to do a proper write-up about what I did) We have a cat water bowl; it looks like this:
The cat water bowl
Under "normal" circumstances, it usually lasts about seven days. So when our weekly routine is happening, we'll refill it on a Saturday whilst doing house chores, and it'll last until the following Saturday when it gets refilled. Unfortunately, sometimes our routine gets disrupted, and we forget. Sometimes, we travel and have a house-sitter, who may not pay as close attention to such things as we do. Once, one of our cats was licking the condensation off a chilled bottle of soft drink that was on the kitchen counter one evening, before we realised the water bowl needed refilling. Naturally, we felt like terrible pet owners. So I think it was some time around the 2008 Maker Faire that I hatched the idea of having some sort of water sensor on the cat bowl, which would communicate with one of the various computers in the house. At the Maker Faire, I bought a copy of Making Things Talk, and an Arduino starter kit, which consisted of a Diecimila board and a make-it-yourself proto shield. I also bought a little electronics starter kit, which consisted of a breadboard and various components, and a USB-TTL cable. I decided to use Bluetooth to communicate with the board, as I already had my MythTV setup using a Bluetooth keyboard and mouse, and it was within range of the water bowl. I decided against using Zigbee, because I didn't know anything about it, and I didn't want to add (or learn about) yet another wireless infrastructure just for this project. I should point out that I know very little about electronics. I'd never owned (or really used) a soldering iron until I embarked on this project. I took a basic soldering class at the Tech Shop, but I'd already assembled the proto shield by the time I took the class, so I'd pretty much figured it out. I had a very naive vision that I could just basically shove two wires in some water and it'd close a circuit, and that would be my water sensor. Of course this didn't work, so I started hunting around on the Internet for a circuit that would do this. I happened upon a circuit (I don't seem to have retained the URL, so I can't link to it) which just consisted of a couple of transistors and some resistors. So I headed off to Fry's to try and buy the transistors I needed. I quickly discovered that I didn't have sufficient information to identify the transistors that I wanted, but I did happen to stumble upon a cheap assemble-it-yourself water alarm. It consisted of a PCB, a transistor, some resistors and a buzzer. I bought a couple of these instead. Between studying the PCB and the circuit diagram that came with the alarm, I was able to reproduce the circuit on my breadboard instead of on the PCB. Sure enough, placing the two probes in water closed the circuit. I replaced the buzzer with an LED so I could see what was going on. I attached the circuit to the Arduino proto shield, and had it feed into one of the digital I/O ports. I wrote some quick and dirty Wiring code so that when water was not present (i.e. the circuit was open and no current was detected on that I/O port) the LED was switched on. Really, at this point I didn't need a microcontroller; I could have presumably achieved the same thing with a NOT gate. At this point, I wanted to make the sensor remotely queryable.
I bought a BlueSMiRF Silver Bluetooth modem, which I attached to the TX and RX lines of the board (I first configured it by attaching the USB-TTL cable to it and using Minicom on my laptop). I extended the Wiring code to provide a rudimentary prompt and accept a command to check whether water was present. I think around this time it also dawned on me that I could use the digital I/O pins as a switch: when they're "high" they provide power. So rather than constantly running a current through the water, I only needed to briefly power up the water detection circuit, see whether the circuit closed, and report that water was present if it did. I much preferred this, as at the time I was endeavouring to power the whole sensor off a 9 volt battery. I figured I'd get much better battery life if I wasn't running a current through the water the entire time. I should point out that I did some "tongue tests" in a glass of water while the circuit was powered up, and couldn't detect a difference between when the circuit was on or off. The last thing I wanted to be doing was zapping the cats! At this point, the Wiring and Arduino work was pretty much complete. I set up ser2net on the MythTV server, so that I could just connect to port 4000 and be connected to the water sensor.
apollock@icarus:~$ telnet teevee 4000
Trying 172.16.0.9...
Connected to teevee.andrew.net.au.
Escape character is '^]'.
ser2net port 4000 device /dev/rfcomm0 [9600 N81] (Debian GNU/Linux)
waterbowl> s
Water is present
waterbowl>
telnet> q
Connection closed.
apollock@icarus:~$
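The matching ser2net configuration is a single line along these lines (reconstructed from the banner above using classic ser2net.conf syntax; I have not verified it against this exact setup):

    # /etc/ser2net.conf: TCP port 4000 <-> Bluetooth serial port at 9600 8N1
    4000:telnet:600:/dev/rfcomm0:9600 8DATABITS NONE 1STOPBIT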
The Wiring code running on the Arduino board is checked in here. Next, I wanted to monitor this with Nagios. One thing I found with the Bluetooth connection was that it wasn't all that reliable. Not every connection to port 4000 resulted in a connection with the water sensor. I elected to write some standalone code that submitted results to Nagios by way of a passive check, rather than having Nagios try to actively monitor it. Again, trying to conserve energy, I decided to only check the sensor once every 8 hours. So I wrote a Python daemon, which tries to be as robust as possible. If at first it doesn't make a connection on its 8-hourly schedule, it keeps trying until it does. Nagios itself has some freshness detection for this monitor, so if no passive result is submitted within 8 hours and one minute, Nagios alerts that something is possibly wrong with the sensor itself. (The original intent was to deal with the situation where the battery went flat.) This is the Nagios service definition I've got. Some of it may be unnecessary or redundant:
define service {
        host_name                       teevee
        service_description             Cat water bowl
        check_command                   check_stale!2!"Check water bowl monitor is on and reachable"
        normal_check_interval           480
        notification_interval           240
        active_checks_enabled           0
        check_freshness                 1
        freshness_threshold             28860
        max_check_attempts              1
        check_period                    24x7
        use                             generic-service
        stalking_options                o,w,c
        contact_groups                  everyone
}
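For reference, a passive result for this service would be submitted by writing Nagios's standard external-command syntax to its command file; a sketch (the command-file path varies per installation):

    # format: [timestamp] PROCESS_SERVICE_CHECK_RESULT;<host>;<service>;<code>;<output>
    printf '[%lu] PROCESS_SERVICE_CHECK_RESULT;teevee;Cat water bowl;0;Water is present\n' \
        "$(date +%s)" > /var/lib/nagios3/rw/nagios.cmd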
The code for the Python daemon is checked in here. So everything was going swimmingly, except that I couldn't get even 24 hours of monitoring, despite my conservative three-times-a-day schedule, the Bluetooth modem being configured for all of its power-saving options, and the water sensor itself only being powered on when it was performing a check. There wasn't much more I could do to try and reduce power consumption, at least with my limited electronics knowledge. Perhaps using Zigbee instead of Bluetooth would have helped, but I still think I'd have been going through 9 volt batteries more quickly than I'd have liked. I also had a brief foray into solar powering the board. I thought maybe the lighting in the house would be sufficient to power the board. Of course this didn't work out. I'd also have needed to make the code running on the board more self-sufficient, and have it just provide a water status indication whenever it had enough power to do so, instead of being the fairly dumb device it currently is. This all felt a whole level more complicated, and out of my league, and I wasn't interested in attempting this sort of remote-sensing exercise at this time. The Arduino board can also be powered by USB, so as I already had some long USB type A to type B cables (that had funky lights in them to boot), I decided to just get a wall wart with a USB type A receptacle, and powered the Arduino board that way. So much for being completely "wireless". (I could have also gotten a general-purpose DC power supply capable of putting out 9 volts, but I doubted I'd get one with a sufficiently long cable, which is why I went for the USB-powered option, as I already had a cable long enough.) Speaking of wires, the most challenging part was the probes for the water sensor. As they were going to be permanently in the cats' drinking water, I didn't want to contaminate the water with them. I figured plain untinned copper wire would be okay, since water pipes are copper. Finding untinned, unstranded copper wire was a real challenge. I started out using some FM antenna cable, but that was stranded, and the shielding was a nightmare to strip. It was also reasonably difficult to make it go where I wanted it. What I really wanted was something more solid, that would flex and stay where I put it. I cannibalised a spare IEC power cable, but it was also stranded copper wire. I finally managed to obtain some solid-core CAT-5 cable from a hardware store, and this has worked exactly as I wanted. I haven't done any further work on the setup since getting the CAT-5 cable for the probe. Further improvements that I'd like to do at some point:
The finished product
I had a lot of fun with this project. I had a real sense of achievement, being able to go from concept to completion, and I learned a few things about electronics along the way. I'm normally not a fan of messing around with hardware. Some photos of the project are here.

30 September 2008

Adeodato Simó: French lessons

I’m practically 4 courses away from finishing my degree (the course I failed in June, I passed a couple of weeks ago). 4 courses which I loathe, but that I’ll get done this year. After them, I still have to prepare something akin to a “final project”, but that doesn’t worry me much, since it’ll be something I enjoy. Apart from these 4 courses, I also have to take a couple of non-computer-science ones, whichever I want. I’ve decided to go for French lessons, since I’ve always wanted to learn French. I’m very excited about this. Today was my first day. I had some previous, incredibly rudimentary notions of French already, but that didn’t keep me from finding it a bit daunting at first: there’s so much to learn. (I can’t remember at all how I felt when I started studying English, but alas, I was a kid, when you’re taught stuff you know zero about.) I decided, though, to look at it from a positive angle, and make a pleasant experience out of it: not every day does one have the opportunity to dive into something completely new. I remained excited for the rest of the class, and I’m sure my classmates thought, “Why is this guy stupidly smiling from time to time?” (These are courses designed for freshmen, and I felt out of place. It seems 8 years is a lot of time.) Incidentally, my sister speaks French, and she’s lent me some books and dictionaries from her time as a student. Some of these were books from the Le Petit Nicolas series; I had read all of them during my childhood, in Spanish. I glanced through the French versions, and recognizing every single picture in them as something I had already seen, albeit fifteen years ago, was a very weird feeling.

1 July 2008

Decklin Foster: Did you see what he was wearing? Oh. My. God.

Debian shall soon have a Conkeror package, thanks to Axel Beckert who takes a minute to break down the current keymap. Naturally, you have to poke fun at vi users here. But wait! I am a vi user! What can I say, except maybe
  1. Emacs (the rudiments, anyway) is like riding a bicycle
  2. When I say vi, I mean vile, not vim. vim gives me hives. vile is teh awesome.
I am working on a set of vile-ish bindings, and I can't say I feel any pressing need to stick hjkl in. You could start from there, but that's missing the point, I think. (You know what's also awesome? My email is still down, so I won't even have to delete flames from people who take their choice of editor/browser Very Seriously until sometime tomorrow.)

9 June 2008

Igor Genibel: First session

Up early to be fresh and ready for the first session, I got ready quickly. The night was rather short; I went to bed around midnight to finalise the course material. At last, it is ready. I just have a few adjustments to make and it will go off for printing. Better late than never ;) So I was ready at 7:30 for a rather pleasant breakfast by the pool, in the company of a weaver bird and a superb Crowned Crane a few dozen centimetres from me ;) A quick stop at the wifi corner to send two or three emails, and then I was left waiting. All told, I had to wait until 10:30 for someone to come and pick me up. The room was not available; last-minute administrative hassles. Once on site at the ENA (École Nationale de l'Administration), I launched into presenting the system to 25 people. But it was impossible to get the overhead projector that had been provided to work. An Xorg configuration problem to look into this evening. So I projected my slides, made with MagicPoint and generated as HTML, on a machine running Windows. After a brief introduction, we got into the heart of the subject. The trainees were all ears, so I did not hold back on the information. Lunch at 2 pm, frugal for my part, as I wasn't very hungry: chicken in sauce, raw vegetables, rice, potatoes. Off again at 3 pm for the second part of the day. I asked the trainees to introduce themselves... Well, I am going to have to cover the basics of Unix systems for everyone, without exception. The session finished around 6 pm, and I went back to the hotel alone. So, on the programme for the evening: write my daily blog entry, sort out the Xorg configuration so I can use the projector, eat, and then off to sleep ;)

20 January 2008

Martin-Éric Racine: Miscellanea

Things have been rather hectic lately, so I haven't found much time to blog. Here's why:
DBE62
The Gigabit Ethernet version of our thin client took more time to produce than I expected, for a number of reasons mostly related to a few improvements we decided to squeeze into the design at the last minute. However, today we finally reached a point where LinuxBIOS runs as well as it did on our previous DBE61 model and where we no longer need any DOS tool to flash the MAC address into the VIA Velocity Gigabit chip we selected. Hurray! Production will only commence in one month, but I'm already excited by the new model's potential, both as a thin client and as an embedded platform. Another good thing is that, thanks to Ubuntu developer Scott Balneaves, we managed to get all the necessary tools to support thin clients based on LinuxBIOS into LTSP, so our Etherboot model works out of the box on Ubuntu since Gutsy. Hurray! There is one remaining issue related to recent changes in X.org core functionality that makes the AMD driver we need unstable but, again, various AMD, Debian and Ubuntu developers are looking into fixing this, so we should soon have spotless Geode support in Debian and Ubuntu again.
Türkiye
I visited Turkey twice over the last few months, because I'm putting together a pilot project to better promote the Estonian high-tech sector abroad, in collaboration with the Estonian government. I have to say that the more I visit Turkey, the more I like the place and the more I understand why these people see themselves as Europeans, because, you know what? They are: practically every significant civilization and religion that is at the core of European culture had major events take place in Anatolia or Thrace and, also, a devastatingly huge percentage of the consumer goods sold in Europe are designed and manufactured in Turkey. Learning the rudiments of Turkish has also proven to be a lot of fun. While I'm nowhere near as fluent in Turkish as in Finnish or even in Estonian, the learning curve isn't as steep as I initially expected: Altaic and Ugric languages share a surprising number of grammatical concepts, while Turkish itself borrowed a lot of vocabulary from French, because the founder of modern Turkey, Atatürk, was very fond of the language. I'll venture that proximity to nearby Middle-Eastern countries that were formerly under French influence has something to do with it too.
Identity crisis
To me, the most challenging part of these business missions abroad is representing a whole economic sector of a country of which I'm not a citizen or even a resident. Case in point: being invited to dinner by a Turkish investor, I noticed the waiter asking my host where his foreign guest might be from. A few minutes later, as the waiter put down a gigantic pita bread with the word "Estonia" spelled in roasted sesame seeds, my host asked, reading my business card: Looking at the waiter and pointing at the gigantic pita, he continues: Honestly, trying to keep a straight face while saying "We" about a country of which I'm not a citizen and where I don't even reside becomes unbearable. At some point, some European bureaucrats will have to admit that I need a new citizenship, to reduce the confusion and to let me find myself a proper national identity again; the sooner, the better.
Besides, the absurdity of the situation keeps jumping out at everyone: during the second mission to Turkey, I kept on bumping into Finnish diplomats who took personal offense at me for living in their country and yet representing the interests of a competing, neighboring country. If you ask me, I cannot entirely blame them for it. However, as far as I'm concerned, I've done my homework: I've been here 10 years, I speak the language and I don't have a criminal record. Given this, you'd think that acquiring citizenship would be a mere formality, but the Ulkomaalaisvirasto doesn't see it that way. If you ask me, this country's very first Minister of Immigration, Mrs. Astrid Thors, ought to unilaterally grant citizenship to anyone who's lived here for at least 5 years, just for the asking, regardless of what circumstances brought them here or of what absurd decisions the Ulkomaalaisvirasto might have previously made on their residence permit status. Doing this would go a long way towards undoing the mess of her predecessors at the Ministry of the Interior, and it would speak volumes about how much Finnish society has evolved from the days when any foreigner was a commie they had to push over the Eastern border.

13 January 2008

Andrew Pollock: [debian] Adding domain-search (option 119) support to DHCP

I've been a bit behind with the ISC DHCP v3 package of late. I've had a package of 3.1.0 for a few months, but I'm becoming more and more reluctant to make uploads of this package without giving it some rudimentary testing. Real life has been demanding of late, so I haven't had a chance to do that testing. Nothing's gone wrong with the package to date; I just feel negligent throwing it out there without some basic testing. So I chucked a spare QFE card into my workstation brutus, put unstable on the second hard drive, installed Debian unstable on the spare 10G partition of Sarah's old PowerBook, lashed the two together with a bit of CAT-5, and voila: an instant DHCP client/server testbed.

So now that I'd verified that version 3.1.0 wasn't blatantly broken, I could have a fiddle with the new support for option 119 (domain-search) that has people at work very excited. I spent a bit of time off on a wild goose chase trying to figure out how to configure the DHCP server for this new option before I realised it was as simple as option domain-search "foo", "bar", "baz"; and then made the necessary tweaks to dhclient-script. I'm figuring that since one of the blokes who wrote the RFC works at Apple, and since Leopard supports option 119, the way they're doing it in Leopard must be approximately correct, so I've altered the behaviour of the DHCP client accordingly.

All of this assumes you're not using resolvconf. Prior to 3.1.0, if you had the domain-name option set, the /etc/resolv.conf created by dhclient-script would set the search directive to the domain name. With 3.1.0, I've decided to make this set the domain directive instead; the resolver behaviour remains the same, as best I can determine. With 3.1.0, if the domain-search option is set, then the search directive is set to this. If the domain-name option is set, it is prepended to the list of domains in the domain-search option. This is consistent with what MacOS X 10.5 (Leopard) does. I'm not sure what Windows does (nor what version started honouring the domain-search DHCP option), and since it doesn't have an /etc/resolv.conf equivalent, it's a bit hard to test. I'm going to hazard a guess and hope that since the other bloke who wrote the RFC works for Microsoft, maybe, just maybe, Windows and MacOS X at least behave the same way.

Currently, if resolvconf is in use, you get the old behaviour whereby the search directive is populated with the contents of the domain-name option, and the domain-search option is ignored completely. I've filed #460609 with a patch to fix this. Interestingly, resolvconf itself internally collapses the domain and search directives into the search directive of the /etc/resolv.conf it generates, so I can't quite get the same /etc/resolv.conf generated as would be generated without resolvconf. No big deal; the behaviour should be the same, and the domain directive is largely unnecessary as I understand it.

I have unearthed what I consider to be a bug in the client's domain-search option handling code. It seems to be largely benign, apart from causing the client to emit some errors when it obtains a lease containing the domain-search option, and possibly not splitting the domains in the domain-search option correctly. There's also another (I think unrelated) warning that gets emitted as well. I'm waiting to hear back from upstream about both.
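To make the two ends concrete, here's a rough sketch; the domain names and nameserver address are illustrative rather than from my actual setup, and the comma-separated domain-list syntax is per the dhcp-options documentation:

# /etc/dhcp3/dhcpd.conf (server side) -- illustrative values
option domain-name "example.com";
option domain-search "sales.example.com", "eng.example.com";

# /etc/resolv.conf as dhclient-script should then write it (client side):
# domain-name populates the domain directive, and the search directive is
# the domain-search list with domain-name prepended.
domain example.com
search example.com sales.example.com eng.example.com
nameserver 192.0.2.53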
I'm not going to package the recently released 4.0 DHCP until I've figured out a transition plan to go from the old DHCP v2 packages to the v3 ones.

15 May 2007

Benjamin Mako Hill: Selectricity

More than a year ago, I published an election methods library called RubyVote. Interest in the library surpassed any of my expectations: I know of at least one startup using the library heavily in their core business, and a number of fun sites, like Red Blue Smackdown, are using it as well. The point, of course, was to make complex but superior election methods accessible in all sorts of places where people were making decisions suboptimally. In its own small way, it seems to have succeeded enormously.

Over the last year, I've been asked by a variety of people if they could use RubyVote for their own organizational decision-making -- tasks like electing the leadership of a student group or the members of a non-profit board of directors. Since RubyVote was just a library without a UI of its own, I had to tell them "no." I eventually caved in and got to work on a quick and dirty web-based front end to the library. That project grew into Selectricity, a primarily web-based interface to a variety of different election methods and voting technologies. You can currently try out quickvotes, which can be created in half a minute and voted on in a quarter, but which bring all the power of preferential voting technologies to bear on very simple decisions. Prompted by Aaron Swartz, I also built a mobile phone version that lets you send a short email or SMS to create or vote in an election.

For those that follow research in voting technologies, there's not a lot of new stuff here. What's new is that this project, unlike the vast majority of voting technologies, is interested in the state of the art for everyone but governments. Clearly government decisions are important, but they're one set of decisions, usually made only once a year. Selectricity is voting machinery for everything and everyone else.

It was announced in a variety of news outlets today that Selectricity was selected for a grant from mtvU and Cisco as part of their Digital Incubator project. As part of that, I'm going to be working with some other voting technology experts to bring tools for auditable elections, cryptographically secured anonymity, and voter verifiability to the platform (I have only rudimentary functionality today). A couple of people will be joining me on the project this summer, and we will be building out what I hope will be an extremely attractive platform for better every-day decision-making. More than the grant, though, I'm excited about the visibility that use by MTV will bring to the project. Most of all, I'm just excited about more free software and more (and more accessible) democratic decision-making. My adviser Chris Csikszentmihályi put it well:
One of the big arguments against preferential voting, or new voting technologies, is the fear that they would disenfranchise the average person who doesn't yet understand how they work. Certainly, making all voting technologies open source is critical, but the issue of familiarity is worth considering. We're hoping that MTV and eventually American Idol will move their voting over to Selectricity, allowing it to work both as a technical tool and as a pedagogical one, training future voters. Why not integrate democratic processes into all your software and communications tools? Why not use the best democratic processes available, so long as they're available to everyone?
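For readers who haven't met preferential methods before, here's a minimal sketch of one classic tally, a Borda count; this is purely an illustration in Python, not RubyVote's actual API:

from collections import defaultdict

def borda(ballots):
    # Each ballot ranks candidates from most to least preferred; a
    # candidate ranked first among n candidates scores n - 1 points,
    # the second n - 2, and so on. The highest total score wins.
    scores = defaultdict(int)
    for ballot in ballots:
        n = len(ballot)
        for rank, candidate in enumerate(ballot):
            scores[candidate] += n - 1 - rank
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

ballots = [['A', 'B', 'C'], ['B', 'C', 'A'], ['B', 'A', 'C']]
print(borda(ballots))  # [('B', 5), ('A', 3), ('C', 1)] -- B wins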

11 May 2007

Robert McQueen: Tubes and Planets

Daf blogged earlier about some of the work we’ve done thus far for the One Laptop Per Child project. Tubes (although the picture looks more like snakes if you ask me :D) are a really cool technology that should let OLPC activity authors just work on their activity and use D-Bus and Telepathy to take care of the communications. At the moment our implementation for Gabble (Telepathy’s XMPP backend) is pretty rudimentary and sends all data via the server, but this already lets us layer multi-user tubes over XMPP multi-user chat rooms and have them act like a bus where each member of the room is also a D-Bus endpoint. You can export objects, call methods and emit signals just as normal (a sketch follows below). Next up we’re going to implement them in Salut (the link-local XMPP backend, which we’ll use for communications over OLPC’s mesh networking), using good old TCP for the one-to-one connections and some of Sjoerd’s more exciting link-local multicast stuff for multi-user tubes. To make tubes work for desktop clients we’re going to go on and look at more advanced Jingle-based ICE NAT traversal. Maybe one of our next ports of call should be raw stream tubes for existing TCP protocols; then we could make a reality of the X-over-Jabber (or whatever other protocol) idea that Matthew Allum was wondering about. :)

I’ve also just stolen Planet Collabora from Daf’s home directory and put it on its own subdomain, so you can add it to your feed readers and keep track of what we’re up to with Telepathy, Farsight and friends.
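To give a feel for what “just as normal” means, here’s a minimal dbus-python sketch; the bus name, object path and method names (org.example.Whiteboard) are made up for illustration, and it runs against the ordinary session bus, whereas in the tube case the bus connection would be handed to you by Telepathy instead:

import dbus
import dbus.service
from dbus.mainloop.glib import DBusGMainLoop
from gi.repository import GLib

DBusGMainLoop(set_as_default=True)
bus = dbus.SessionBus()  # with tubes, Telepathy supplies this connection

class Whiteboard(dbus.service.Object):
    @dbus.service.method('org.example.Whiteboard', in_signature='s')
    def Draw(self, stroke):
        # any member of the room can call this method on our endpoint
        self.Drew(stroke)

    @dbus.service.signal('org.example.Whiteboard', signature='s')
    def Drew(self, stroke):
        pass  # emitted to every endpoint on the bus after this body runs

dbus.service.BusName('org.example.Whiteboard', bus)
Whiteboard(bus, '/org/example/Whiteboard')
GLib.MainLoop().run()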
